    Constrained Overcomplete Analysis Operator Learning for Cosparse Signal Modelling

    We consider the problem of learning a low-dimensional signal model from a collection of training samples. The mainstream approach is to learn an overcomplete dictionary that provides good approximations of the training samples using sparse synthesis coefficients. This well-known sparse model has a less familiar counterpart, in analysis form, called the cosparse analysis model. In this model, signals are characterised by their parsimony in a transformed domain, obtained by applying an overcomplete (linear) analysis operator. We propose to learn an analysis operator from a training corpus using a constrained optimisation framework based on L1 optimisation; the constraint is introduced to exclude trivial solutions. Although there is no definitive answer as to which constraint is the most relevant, we investigate some conventional constraints from the model adaptation literature and adopt the uniformly normalised tight frame (UNTF) constraint for this purpose. We then derive a practical learning algorithm, based on projected subgradients and the Douglas-Rachford splitting technique, and demonstrate its ability to robustly recover a ground-truth analysis operator when provided with a clean training set of sufficient size. We also learn an analysis operator for images from noisy cosparse signals, which is a more realistic experiment. As the derived optimisation problem is not a convex program, such variational methods often find only a local minimum. Some local optimality conditions are derived for two different settings, providing preliminary theoretical support for the well-posedness of the learning problem under appropriate conditions.
    Comment: 29 pages, 13 figures, accepted to be published in TS
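    The abstract describes minimising the L1 norm of the analysed training set over operators constrained to the UNTF set, solved with projected subgradients. Below is a minimal, hedged sketch of that idea in Python; the alternating projection onto the UNTF set, the function names, and the step size are illustrative assumptions, not the paper's exact algorithm (which also uses Douglas-Rachford splitting).

```python
import numpy as np

def project_untf(Omega, n_alt=10):
    """Approximate projection onto uniformly normalised tight frames (UNTF)
    by alternating between the tight-frame and unit-row-norm constraints.
    (A simplified stand-in for the projection used in the paper.)"""
    for _ in range(n_alt):
        U, _, Vt = np.linalg.svd(Omega, full_matrices=False)
        Omega = U @ Vt                                            # nearest tight frame
        Omega = Omega / np.linalg.norm(Omega, axis=1, keepdims=True)  # uniform row norms
    return Omega

def learn_analysis_operator(X, p, n_iter=200, step=1e-3, seed=0):
    """Projected-subgradient sketch for  min_Omega ||Omega X||_1  s.t. Omega is UNTF,
    where X (d x N) holds the training samples column-wise and Omega is p x d."""
    rng = np.random.default_rng(seed)
    Omega = project_untf(rng.standard_normal((p, X.shape[0])))
    for _ in range(n_iter):
        subgrad = np.sign(Omega @ X) @ X.T                        # subgradient of ||Omega X||_1
        Omega = project_untf(Omega - step * subgrad)
    return Omega
```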

    Compressive and Noncompressive Power Spectral Density Estimation from Periodic Nonuniform Samples

    This paper presents a novel power spectral density (PSD) estimation technique for band-limited, wide-sense stationary signals from sub-Nyquist sampled data. The technique employs multi-coset sampling and incorporates the advantages of compressed sensing (CS) when the power spectrum is sparse, but it applies to sparse and nonsparse power spectra alike. The estimates are consistent piecewise-constant approximations whose resolution (the width of the piecewise-constant segments) is controlled by the periodicity of the multi-coset sampling. We show that compressive estimates exhibit better trade-offs among the estimator's resolution, system complexity, and average sampling rate than their noncompressive counterparts. For suitable sampling patterns, noncompressive estimates are obtained as least squares solutions. Because power spectra are non-negative, compressive estimates can be computed by seeking non-negative least squares solutions (provided appropriate sampling patterns exist) instead of using standard CS recovery algorithms. This flexibility suggests a reduction in computational overhead for systems estimating both sparse and nonsparse power spectra, because one algorithm can be used to compute both compressive and noncompressive estimates.
    Comment: 26 pages, single spaced, 9 figure
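    The computational core described above is recovering a non-negative, piecewise-constant PSD from linear correlation measurements. A hedged sketch, assuming the measurements have already been arranged into a linear model r ≈ A s (constructing A from the multi-coset sampling pattern is problem-specific and omitted here):

```python
import numpy as np
from scipy.optimize import nnls

def estimate_psd(r, A):
    """Recover the piecewise-constant PSD segment powers s >= 0 from correlation
    measurements r, assuming the linear model r ≈ A @ s.  Non-negative least
    squares handles sparse and nonsparse spectra with the same solver."""
    s_hat, _residual = nnls(A, r)
    return s_hat

# Toy demonstration with a random, purely illustrative measurement model.
rng = np.random.default_rng(0)
A = np.abs(rng.standard_normal((40, 16)))
s_true = np.zeros(16)
s_true[[2, 7, 11]] = [1.0, 0.5, 2.0]              # a sparse spectrum, for example
r = A @ s_true + 0.01 * rng.standard_normal(40)
print(np.round(estimate_psd(r, A), 2))
```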

    Inexact Gradient Projection and Fast Data Driven Compressed Sensing

    We study convergence of the iterative projected gradient (IPG) algorithm for arbitrary (possibly nonconvex) sets, when both the gradient and projection oracles are computed approximately. We consider different notions of approximation and show that the Progressive Fixed Precision (PFP) and the (1+ε)-optimal oracles can achieve the same accuracy as the exact IPG algorithm. We show that the former scheme also maintains the (linear) rate of convergence of the exact algorithm, under the same embedding assumption. In contrast, the (1+ε)-approximate oracle requires a stronger embedding condition and moderate compression ratios, and it typically slows down the convergence. We apply our results to accelerate solving a class of data-driven compressed sensing problems, where we replace iterative exhaustive searches over large datasets by fast approximate nearest neighbour search strategies based on the cover tree data structure. For datasets with low intrinsic dimension, our proposed algorithm achieves a complexity logarithmic in the dataset population, as opposed to the linear complexity of a brute-force search. Several numerical experiments corroborate the behaviour predicted by our theoretical analysis.
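    As a sketch of the data-driven setting described above, the model set is a finite dataset and the projection step is a nearest-neighbour search that can be replaced by an approximate (e.g. cover-tree) search; the interfaces and names below are assumptions for illustration only.

```python
import numpy as np

def brute_force_project(x, dataset):
    """Exact projection onto a finite dataset: return the closest point
    (linear cost in the dataset size)."""
    return dataset[np.argmin(np.linalg.norm(dataset - x, axis=1))]

def inexact_ipg(y, A, dataset, step, n_iter, project=brute_force_project):
    """Iterative projected gradient for  min_x ||y - A x||^2  with x constrained
    to a finite dataset.  Passing an approximate nearest-neighbour routine as
    `project` (e.g. one backed by a cover tree) yields the inexact variant."""
    x = dataset[0].copy()
    for _ in range(n_iter):
        x = x - step * (A.T @ (A @ x - y))    # gradient step on the least-squares loss
        x = project(x, dataset)               # (approximate) projection oracle
    return x
```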

    Deep Unrolling for Magnetic Resonance Fingerprinting

    Magnetic Resonance Fingerprinting (MRF) has emerged as a promising quantitative MR imaging approach. Deep learning methods have been proposed for MRF and have demonstrated improved performance over classical compressed sensing algorithms. However, many of these end-to-end models are physics-free, whereas consistency of the predictions with the physical forward model is crucial for reliably solving inverse problems. To address this, [1] recently proposed a proximal gradient descent framework that directly incorporates the forward acquisition and Bloch dynamic models within an unrolled learning mechanism. However, [1] only evaluated the unrolled model on synthetic data with Cartesian sampling trajectories. In this paper, as a complement to [1], we investigate other choices of encoders to build the proximal neural network and evaluate the deep unrolling algorithm on real accelerated MRF scans with non-Cartesian k-space sampling trajectories.
    Comment: Tech report. arXiv admin note: substantial text overlap with arXiv:2006.1527
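    The unrolled proximal gradient scheme described above alternates a data-consistency gradient step through the forward acquisition model with a learned proximal network. A hedged PyTorch-style sketch follows; the operator interfaces, the small CNN used as the proximal encoder, and the channel count are illustrative assumptions rather than the architecture of [1].

```python
import torch
import torch.nn as nn

class UnrolledMRF(nn.Module):
    """Sketch of proximal-gradient unrolling:
        x_{k+1} = D_theta( x_k - alpha * A^H (A x_k - y) )
    where A / A^H are the (possibly non-Cartesian) forward and adjoint
    acquisition operators and D_theta is a learned proximal network.
    Complex-to-real handling and the exact encoder choice are omitted."""
    def __init__(self, n_iters=5, channels=10):
        super().__init__()
        self.n_iters = n_iters
        self.alpha = nn.Parameter(torch.tensor(1.0))       # learnable step size
        self.prox = nn.Sequential(                         # small CNN as the proximal operator
            nn.Conv2d(channels, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, channels, 3, padding=1),
        )

    def forward(self, y, forward_op, adjoint_op, x0):
        x = x0
        for _ in range(self.n_iters):
            grad = adjoint_op(forward_op(x) - y)           # data-consistency gradient
            x = self.prox(x - self.alpha * grad)           # learned proximal step
        return x
```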

    Compressible Distributions for High-dimensional Statistics

    We develop a principled way of identifying probability distributions whose independent and identically distributed (iid) realizations are compressible, i.e., can be well approximated as sparse. We focus on Gaussian random underdetermined linear regression (GULR) problems, where compressibility is known to ensure the success of estimators exploiting sparse regularization. We prove that many distributions revolving around the maximum a posteriori (MAP) interpretation of sparse regularized estimators are in fact incompressible, in the limit of large problem sizes. A highlight is the Laplace distribution and ℓ1-regularized estimators such as the Lasso and Basis Pursuit denoising. To establish this result, we identify non-trivial undersampling regions in GULR where the simple least squares solution almost surely outperforms an oracle sparse solution when the data is generated from the Laplace distribution. We provide simple rules of thumb, based on the second and fourth moments, to characterize classes of compressible (respectively incompressible) distributions. Generalized Gaussian and generalized Pareto distributions serve as running examples for concreteness.
    Comment: Was previously entitled "Compressible priors for high-dimensional statistics"; IEEE Transactions on Information Theory (2012
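    The moment-based rule of thumb mentioned above can be illustrated with a kurtosis-like ratio of the fourth moment to the squared second moment: heavier-tailed samples give larger ratios. The exact criteria and thresholds of the paper are not reproduced here, so this is only an assumption-laden illustration.

```python
import numpy as np

def fourth_to_second_moment_ratio(samples):
    """Kurtosis-like ratio m4 / m2^2 used as a rough, moment-based proxy.
    Heavier tails (larger ratio) hint at compressibility; the Laplace
    distribution (ratio 6) is shown in the paper to be incompressible
    in the large-dimensional limit."""
    m2 = np.mean(samples ** 2)
    m4 = np.mean(samples ** 4)
    return m4 / m2 ** 2

rng = np.random.default_rng(0)
print(fourth_to_second_moment_ratio(rng.laplace(size=200_000)))           # ≈ 6
print(fourth_to_second_moment_ratio(rng.standard_t(df=5, size=200_000)))  # ≈ 9, heavier tails
```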